Frequently Asked Questions
Besides listing frequently-asked questions, this article also summarizes
frequently-posted answers. Even if you know all the answers, it's worth
skimming through this list once in a while, so that when you see one of
its questions unwittingly posted, you won't have to waste time
answering.
This article is always being improved. Your input is welcomed. Send
your comments to scs@adam.mit.edu, scs%adam.mit.edu@mit.edu, and/or
mit-eddie!adam.mit.edu!scs; this article's From: line may be unusable.
The questions answered here are divided into several categories:
1. Null Pointers
2. Arrays and Pointers
3. Memory Allocation
4. Expressions
5. ANSI C
6. C Preprocessor
7. Variable-Length Argument Lists
8. Boolean Expressions and Variables
9. Structs, Enums, and Unions
10. Declarations
11. Stdio
12. Library Subroutines
13. Lint
14. Style
15. Floating Point
16. System Dependencies
17. Miscellaneous (Fortran to C converters, YACC grammars, etc.)
Herewith, some frequently-asked questions and their answers:
Section 1. Null Pointers
1.1: What is this infamous null pointer, anyway?
A: The language definition states that for each pointer type, there
is a special value -- the "null pointer" -- which is
distinguishable from all other pointer values and which is not
the address of any object. That is, the address-of operator &
will never yield a null pointer, nor will a successful call to
malloc. (malloc returns a null pointer when it fails, and this
is a typical use of null pointers: as a "special" pointer value
with some other meaning, usually "not allocated" or "not
pointing anywhere yet.")
A null pointer is conceptually different from an uninitialized
pointer. A null pointer is known not to point to any object; an
uninitialized pointer might point anywhere. See also questions
3.1, 3.9, and 17.1.
As mentioned in the definition above, there is a null pointer
for each pointer type, and the internal values of null pointers
for different types may be different. Although programmers need
not know the internal values, the compiler must always be
informed which type of null pointer is required, so it can make
the distinction if necessary (see below).
References: K&R I Sec. 5.4 pp. 97-8; K&R II Sec. 5.4 p. 102; H&S
Sec. 5.3 p. 91; ANSI Sec. 3.2.2.3 p. 38.
1.2: How do I "get" a null pointer in my programs?
A: According to the language definition, a constant 0 in a pointer
context is converted into a null pointer at compile time. That
is, in an initialization, assignment, or comparison when one
side is a variable or expression of pointer type, the compiler
can tell that a constant 0 on the other side requests a null
pointer, and generate the correctly-typed null pointer value.
Therefore, the following fragments are perfectly legal:
char *p = 0;
if(p != 0)
However, an argument being passed to a function is not
necessarily recognizable as a pointer context, and the compiler
may not be able to tell that an unadorned 0 "means" a null
pointer. For instance, the Unix system call "execl" takes a
variable-length, null-pointer-terminated list of character
pointer arguments. To generate a null pointer in a function
call context, an explicit cast is typically required, to force
the 0 to be in a pointer context:
execl("/bin/sh", "sh", "-c", "ls", (char *)0);
If the (char *) cast were omitted, the compiler would not know
to pass a null pointer, and would pass an integer 0 instead.
(Note that many Unix manuals get this example wrong.)
When function prototypes are in scope, argument passing becomes
an "assignment context," and most casts may safely be omitted,
since the prototype tells the compiler that a pointer is
required, and of which type, enabling it to correctly convert
unadorned 0's. Function prototypes cannot provide the types for
variable arguments in variable-length argument lists, however,
so explicit casts are still required for those arguments. It is
safest always to cast null pointer function arguments, to guard
against varargs functions or those without prototypes, to allow
interim use of non-ANSI compilers, and to demonstrate that you
know what you are doing. (Incidentally, it's also a simpler
rule to remember.)
	Summary:

	Unadorned 0 okay:		Explicit cast required:

	initialization			function call,
	assignment			  no prototype in scope
	comparison
	function call,			variable argument in
	  prototype in scope,		  varargs function call
	  fixed argument
References: K&R I Sec. A7.7 p. 190, Sec. A7.14 p. 192; K&R II
Sec. A7.10 p. 207, Sec. A7.17 p. 209; H&S Sec. 4.6.3 p. 72; ANSI
Sec. 3.2.2.3 .
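To see why the cast matters for variable-length argument lists, here is a minimal sketch of a varargs function in the style of execl's argument list. The name count_args and its behavior are illustrative assumptions, not part of any standard library:

```c
#include <stdarg.h>
#include <stddef.h>

/* Counts the string arguments up to a terminating null pointer,
 * in the style of execl's argument list. */
int count_args(const char *first, ...)
{
	va_list ap;
	int n = 0;
	const char *p;

	va_start(ap, first);
	for (p = first; p != NULL; p = va_arg(ap, const char *))
		n++;
	va_end(ap);
	return n;
}
```

A call such as count_args("sh", "-c", "ls", (char *)0) relies on the cast; an uncast 0 would be passed as an int, which va_arg may misread on machines where int and char * differ in size or representation.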
1.3: What is NULL and how is it #defined?
A: As a matter of style, many people prefer not to have unadorned
0's scattered throughout their programs. For this reason, the
preprocessor macro NULL is #defined (by <stdio.h> or
<stddef.h>), with value 0 (or (void *)0, about which more
later). A programmer who wishes to make explicit the
distinction between 0 the integer and 0 the null pointer can
then use NULL whenever a null pointer is required. This is a
stylistic convention only; the preprocessor turns NULL back to 0
which is then recognized by the compiler (in pointer contexts)
as before. In particular, a cast may still be necessary before
NULL (as before 0) in a function call argument. (The table
under question 1.2 above applies for NULL as well as 0.)
NULL should _only_ be used for pointers; see question 1.8.
References: K&R I Sec. 5.4 pp. 97-8; K&R II Sec. 5.4 p. 102; H&S
Sec. 13.1 p. 283; ANSI Sec. 4.1.5 p. 99, Sec. 3.2.2.3 p. 38,
Rationale Sec. 4.1.5 p. 74.
1.4: How should NULL be #defined on a machine which uses a nonzero
bit pattern as the internal representation of a null pointer?
A: Programmers should never need to know the internal
representation(s) of null pointers, because they are normally
taken care of by the compiler. If a machine uses a nonzero bit
pattern for null pointers, it is the compiler's responsibility
to generate it when the programmer requests, by writing "0" or
"NULL," a null pointer. Therefore, #defining NULL as 0 on a
machine for which internal null pointers are nonzero is as valid
as on any other, because the compiler must (and can) still
generate the machine's correct null pointers in response to
unadorned 0's seen in pointer contexts.
1.5: If NULL were defined as follows:
#define NULL (char *)0
wouldn't that make function calls which pass an uncast NULL
work?
A: Not in general. The problem is that there are machines which
use different internal representations for pointers to different
types of data. The suggested #definition would make uncast NULL
arguments to functions expecting pointers to characters work
correctly, but pointer arguments of other types would still be
problematical, and legal constructions such as
FILE *fp = NULL;
could fail.
Nevertheless, ANSI C allows the alternate
#define NULL ((void *)0)
definition for NULL. Besides helping incorrect programs to work
(but only on machines with homogeneous pointers, thus
questionably valid assistance) this definition may catch
programs which use NULL incorrectly (e.g. when the ASCII NUL
character was really intended; see question 1.8).
1.6: I use the preprocessor macro
#define Nullptr(type) (type *)0
to help me build null pointers of the correct type.
A: This trick, though popular in some circles, does not buy much.
It is not needed in assignments and comparisons; see question
1.2. It does not even save keystrokes. Its use suggests to the
reader that the author is shaky on the subject of null pointers,
and requires the reader to check the #definition of the macro,
its invocations, and _all_ other pointer usages much more
carefully. See also question 8.1.
1.7: Is the abbreviated pointer comparison "if(p)" to test for non-
null pointers valid? What if the internal representation for
null pointers is nonzero?
A: When C requires the boolean value of an expression (in the if,
while, for, and do statements, and with the &&, ||, !, and ?:
operators), a false value is produced when the expression
compares equal to zero, and a true value otherwise. That is,
whenever one writes
if(expr)
where "expr" is any expression at all, the compiler essentially
acts as if it had been written as
if(expr != 0)
Substituting the trivial pointer expression "p" for "expr," we
have
if(p) is equivalent to if(p != 0)
and this is a comparison context, so the compiler can tell that
the (implicit) 0 is a null pointer, and use the correct value.
There is no trickery involved here; compilers do work this way,
and generate identical code for both statements. The internal
representation of a pointer does _not_ matter.
The boolean negation operator, !, can be described as follows:
!expr is essentially equivalent to expr?0:1
It is left as an exercise for the reader to show that
if(!p) is equivalent to if(p == 0)
"Abbreviations" such as if(p), though perfectly legal, are
considered by some to be bad style.
See also question 8.2.
References: K&R II Sec. A7.4.7 p. 204; H&S Sec. 5.3 p. 91; ANSI
Secs. 3.3.3.3, 3.3.9, 3.3.13, 3.3.14, 3.3.15, 3.6.4.1, and
3.6.5 .
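The equivalence can be checked directly. This small sketch (the helper names are invented for illustration) shows that the abbreviated and explicit tests behave identically, regardless of the pointer's internal representation:

```c
int probe;   /* an object for a valid non-null pointer to point at */

int test_short(const int *p) { return p ? 1 : 0; }         /* if(p)      */
int test_long(const int *p)  { return (p != 0) ? 1 : 0; }  /* if(p != 0) */
```

A conforming compiler generates the same comparison for both; the implicit 0 in the first form is a null pointer, not an integer.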
1.8: If "NULL" and "0" are equivalent, which should I use?
A: Many programmers believe that "NULL" should be used in all
pointer contexts, as a reminder that the value is to be thought
of as a pointer. Others feel that the confusion surrounding
"NULL" and "0" is only compounded by hiding "0" behind a
#definition, and prefer to use unadorned "0" instead. There is
no one right answer. C programmers must understand that "NULL"
and "0" are interchangeable and that an uncast "0" is perfectly
acceptable in initialization, assignment, and comparison
contexts. Any usage of "NULL" (as opposed to "0") should be
considered a gentle reminder that a pointer is involved;
programmers should not depend on it (either for their own
understanding or the compiler's) for distinguishing pointer 0's
from integer 0's.
NULL should _not_ be used when another kind of 0 is required,
even though it might work, because doing so sends the wrong
stylistic message. (ANSI allows the #definition of NULL to be
(void *)0, which will not work in non-pointer contexts.) In
particular, do not use NULL when the ASCII null character (NUL)
is desired. Provide your own definition
#define NUL '\0'
if you must.
Reference: K&R II Sec. 5.4 p. 102.
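If you do define such a NUL macro, it belongs only in character contexts. A minimal sketch (my_strlen is a hypothetical name, standing in for the library's strlen):

```c
#include <stddef.h>

#define NUL '\0'    /* the null character, as suggested above */

size_t my_strlen(const char *s)
{
	size_t n = 0;
	while (s[n] != NUL)   /* character context: NUL, never NULL */
		n++;
	return n;
}
```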
1.9: But wouldn't it be better to use NULL (rather than 0) in case
the value of NULL changes, perhaps on a machine with nonzero
null pointers?
A: No. Although symbolic constants are often used in place of
numbers because the numbers might change, this is _not_ the
reason that NULL is used in place of 0. Once again, the
language guarantees that source-code 0's (in pointer contexts)
generate null pointers. NULL is used only as a stylistic
convention.
1.10: I'm confused. NULL is guaranteed to be 0, but the null pointer
is not?
A: When the term "null" or "NULL" is casually used, one of several
things may be meant:
1. The conceptual null pointer, the abstract language
concept defined in question 1.1. It is implemented
with...
2. The internal (or run-time) representation of a null
pointer, which may or may not be all-bits-0 and which
may be different for different pointer types. The
actual values should be of concern only to compiler
writers. Authors of C programs never see them, since
they use...
3. The source code syntax for null pointers, which is the
single character "0". It is often hidden behind...
4. The NULL macro, which is #defined to be "0" or
"(void *)0". Finally, as red herrings, we have...
5. The ASCII null character (NUL), which does have all bits
zero, but has no relation to the null pointer except in
name; and...
6. The "null string," which is another name for an empty
string (""). The term "null string" can be confusing in
C (and should perhaps be avoided), because it involves a
null ('\0') character, but not a null pointer, which
brings us full circle...
This article always uses the phrase "null pointer" (in lower
case) for sense 1, the character "0" for sense 3, and the
capitalized word "NULL" for sense 4.
1.11: Why is there so much confusion surrounding null pointers? Why
do these questions come up so often?
A: C programmers traditionally like to know more than they need to
about the underlying machine implementation. The fact that null
pointers are represented both in source code, and internally to
most machines, as zero invites unwarranted assumptions. The use
of a preprocessor macro (NULL) suggests that the value might
change later, or on some weird machine. The construct
"if(p == 0)" is easily misread as calling for conversion of p to
an integral type, rather than 0 to a pointer type, before the
comparison. Finally, the distinction between the several uses
of the term "null" (listed above) is often overlooked.
One good way to wade out of the confusion is to imagine that C
had a keyword (perhaps "nil", like Pascal) with which null
pointers were requested. The compiler could either turn "nil"
into the correct type of null pointer, when it could determine
the type from the source code, or complain when it could not.
Now, in fact, in C the keyword for a null pointer is not "nil"
but "0", which works almost as well, except that an uncast "0"
in a non-pointer context generates an integer zero instead of an
error message, and if that uncast 0 was supposed to be a null
pointer, the code may not work.
1.12: I'm still confused. I just can't understand all this null
pointer stuff.
A: Follow these two simple rules:
1. When you want to refer to a null pointer in source code,
use "0" or "NULL".
2. If the usage of "0" or "NULL" is an argument in a
function call, cast it to the pointer type expected by
the function being called.
The rest of the discussion has to do with other people's
misunderstandings, or with the internal representation of null
pointers (which you shouldn't need to know), or with ANSI C
refinements. Understand questions 1.1, 1.2, and 1.3, and
consider 1.8 and 1.11, and you'll do fine.
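Applied to an ordinary prototyped function, the rules look like this; find_first is a hypothetical wrapper around the library's strchr, invented for illustration:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical example: find the first 'x' in s, or return a
 * null pointer if s is itself null or contains no 'x'. */
char *find_first(const char *s)
{
	if (s == NULL)          /* rule 1: comparison context, no cast */
		return NULL;    /* rule 1: a return converts like an assignment */
	return strchr(s, 'x');
}
```

Only when the null pointer travels through an unprototyped or varargs argument list does rule 2's cast become necessary.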
1.13: Given all the confusion surrounding null pointers, wouldn't it
be easier simply to require them to be represented internally by
zeroes?
A: If for no other reason, doing so would be ill-advised because it
would unnecessarily constrain implementations which would
otherwise naturally represent null pointers by special, nonzero
bit patterns, particularly when those values would trigger
automatic hardware traps for invalid accesses.
Besides, what would this requirement really accomplish? Proper
understanding of null pointers does not require knowledge of the
internal representation, whether zero or nonzero. Assuming that
null pointers are internally zero does not make any code easier
to write (except for a certain ill-advised usage of calloc; see
question 3.9). Known-zero internal pointers would not obviate
casts in function calls, because the _size_ of the pointer might
still be different from that of an int. (If "nil" were used to
request null pointers rather than "0," as mentioned in question
1.11, the urge to assume an internal zero representation would
not even arise.)
1.14: Seriously, have any actual machines really used nonzero null
pointers, or different representations for pointers to different
types?
A: The Prime 50 series used segment 07777, offset 0 for the null
pointer, at least for PL/I. Later models used segment 0, offset
0 for null pointers in C, necessitating new instructions such as
TCNP (Test C Null Pointer), evidently as a sop to all the extant
poorly-written C code which made incorrect assumptions. Older,
word-addressed Prime machines were also notorious for requiring
larger byte pointers (char *'s) than word pointers (int *'s).
Some Honeywell-Bull mainframes use the bit pattern 06000 for
(internal) null pointers.
The Symbolics Lisp Machine, a tagged architecture, does not even
have conventional numeric pointers; it uses the pair <NIL, 0>
(basically a nonexistent <object, offset> handle) as a C null
pointer.
Depending on the "memory model" in use, 80*86 processors (PC's)
may use 16 bit data pointers and 32 bit function pointers, or
vice versa.
Section 2. Arrays and Pointers
2.1: I had the definition char a[6] in one source file, and in
another I declared extern char *a. Why didn't it work?
A: The declaration extern char *a simply does not match the actual
definition. The type "pointer-to-type-T" is not the same as
"array-of-type-T." Use extern char a[].
References: CT&P Sec. 3.3 pp. 33-4, Sec. 4.5 pp. 64-5.
2.2: But I heard that char a[] was identical to char *a.
A: Not at all. (What you heard has to do with formal parameters to
functions; see question 2.4.) Arrays are not pointers. The
array declaration "char a[6];" requests that space for six
characters be set aside, to be known by the name "a." That is,
there is a location named "a" at which six characters can sit.
The pointer declaration "char *p;" on the other hand, requests a
place which holds a pointer. The pointer is to be known by the
name "p," and can point to any char (or contiguous array of
chars) anywhere.
As usual, a picture is worth a thousand words. The statements
char a[] = "hello";
char *p = "world";
would result in data structures which could be represented like
this:
	   +---+---+---+---+---+---+
	a: | h | e | l | l | o |\0 |
	   +---+---+---+---+---+---+

	   +-----+     +---+---+---+---+---+---+
	p: |  *======> | w | o | r | l | d |\0 |
	   +-----+     +---+---+---+---+---+---+
It is important to remember that a reference like x[3] generates
different code depending on whether x is an array or a pointer.
Given the declarations above, when the compiler sees the
expression a[3], it emits code to start at the location "a,"
move three past it, and fetch the character there. When it sees
the expression p[3], it emits code to start at the location "p,"
fetch the pointer value there, add three to the pointer, and
finally fetch the character pointed to. In the example above,
both a[3] and p[3] happen to be the character 'l', but the
compiler gets there differently. (See also question 17.14.)
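The difference is easy to observe with sizeof. A small sketch using the declarations above (the helper names are invented for illustration):

```c
#include <stddef.h>

/* the declarations from the picture above */
char a[] = "hello";   /* six characters of storage, including the '\0' */
char *p  = "world";   /* a pointer object, pointing at a string literal */

size_t array_size(void)   { return sizeof a; }   /* the whole array: 6 */
size_t pointer_size(void) { return sizeof p; }   /* just the pointer */
```

Both a[3] and p[3] yield 'l', but sizeof sees through the superficial similarity: the array's size is six characters, while the pointer's is whatever a char * occupies on the machine.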
2.3: So what is meant by the "equivalence of pointers and arrays" in
C?
A: Much of the confusion surrounding pointers in C can be traced to
a misunderstanding of this statement. Saying that arrays and
pointers are "equivalent" does not by any means imply that they
are interchangeable.
"Equivalence" refers to the following key definition:
An lvalue of type array-of-T which appears in an
expression decays (with three exceptions) into a
pointer to its first element; the type of the
resultant pointer is pointer-to-T.
(The exceptions are when the array is the operand of a sizeof or
& operator, or is a literal string initializer for a character
array.)
As a consequence of this definition, there is not really any
difference in the behavior of the "array subscripting" operator
[] as it applies to arrays and pointers. In an expression of
the form a[i], the array reference "a" decays into a pointer,
following the rule above, and is then subscripted exactly as
would be a pointer variable in the expression p[i]. In either
case, the expression x[i] (where x is an array or a pointer) is,
by definition, exactly equivalent to *((x)+(i)).
References: K&R I Sec. 5.3 pp. 93-6; K&R II Sec. 5.3 p. 99; H&S
Sec. 5.4.1 p. 93; ANSI Sec. 3.2.2.1, Sec. 3.3.2.1, Sec. 3.3.6 .
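A quick demonstration that subscripting and explicit pointer arithmetic are, by definition, the same expression (names invented for illustration):

```c
/* arr decays into a pointer to its first element, so arr[i]
 * and *(arr + i) are by definition the same expression. */
int arr[4] = {10, 20, 30, 40};

int via_subscript(int i)  { return arr[i]; }
int via_arithmetic(int i) { return *(arr + i); }
```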
2.4: Then why are array and pointer declarations interchangeable as
function formal parameters?
A: Since arrays decay immediately into pointers, an array is never
actually passed to a function. As a convenience, any parameter
declarations which "look like" arrays, e.g.
f(a)
char a[];
are treated by the compiler as if they were pointers, since that
is what the function will receive if an array is passed:
f(a)
char *a;
This conversion holds only within function formal parameter
declarations, nowhere else. If this conversion bothers you,
avoid it; many people have concluded that the confusion it
causes outweighs the small advantage of having the declaration
"look like" the call and/or the uses within the function.
References: K&R I Sec. 5.3 p. 95, Sec. A10.1 p. 205; K&R II
Sec. 5.3 p. 100, Sec. A8.6.3 p. 218, Sec. A10.1 p. 226; H&S
Sec. 5.4.3 p. 96; ANSI Sec. 3.5.4.3, Sec. 3.7.1, CT&P Sec. 3.3
pp. 33-4.
2.5: Why doesn't sizeof properly report the size of an array which is
a parameter to a function?
A: The sizeof operator reports the size of the pointer parameter
which the function actually receives (see question 2.4).
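This can be seen directly; a minimal sketch (hypothetical names):

```c
#include <stddef.h>

/* Inside a function, a "char a[10]" parameter is really a char *,
 * so sizeof sees only the pointer. */
size_t size_as_parameter(char a[10]) { return sizeof a; }

size_t size_at_definition(void)
{
	char a[10];
	return sizeof a;   /* here a is a real array: 10 */
}
```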
2.6: Someone explained to me that arrays were really just constant
pointers.
A: This is a bit of an oversimplification. An array name is
"constant" in that it cannot be assigned to, but an array is
_not_ a pointer, as the discussion and pictures in question 2.2
should make clear.
2.7: I came across some "joke" code containing the "expression"
5["abcdef"] . How can this be legal C?
A: Yes, Virginia, array subscripting is commutative in C. This
curious fact follows from the pointer definition of array
subscripting, namely that a[e] is exactly equivalent to
*((a)+(e)), for _any_ expression e and primary expression a, as
long as one of them is a pointer expression and one is integral.
This unsuspected commutativity is often mentioned in C texts as
if it were something to be proud of, but it finds no useful
application outside of the Obfuscated C Contest (see question
17.9).
References: ANSI Rationale Sec. 3.3.2.1 p. 41.
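For the skeptical, both orderings can be evaluated side by side (the function names are invented for illustration):

```c
/* Both forms reduce, by the definition of subscripting, to
 * *("abcdef" + 5), which is the character 'f'. */
char normal_order(void)   { return "abcdef"[5]; }
char reversed_order(void) { return 5["abcdef"]; }
```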
2.8: My compiler complained when I passed a two-dimensional array to
a routine expecting a pointer to a pointer.
A: The rule by which arrays decay into pointers is not applied
recursively. An array of arrays (i.e. a two-dimensional array
in C) decays into a pointer to an array, not a pointer to a
pointer. Pointers to arrays can be confusing, and must be
treated carefully. (The confusion is heightened by the
existence of incorrect compilers, including some versions of pcc
and pcc-derived lint's, which improperly accept assignments of
multi-dimensional arrays to multi-level pointers.) If you are
passing a two-dimensional array to a function:
int array[YSIZE][XSIZE];
f(array);
the function's declaration should match:
f(int a[][XSIZE]) {...}
or
f(int (*ap)[XSIZE]) {...} /* ap is a pointer to an array */
In the first declaration, the compiler performs the usual
implicit parameter rewriting of "array of array" to "pointer to
array;" in the second form the pointer declaration is explicit.
Since the called function does not allocate space for the array,
it does not need to know the overall size, so the number of
"rows," YSIZE, can be omitted. The "shape" of the array is
still important, so the "column" dimension XSIZE (and, for 3- or
more dimensional arrays, the intervening ones) must be included.
If a function is already declared as accepting a pointer to a
pointer, it is probably incorrect to pass a two-dimensional
array directly to it.
References: K&R I Sec. 5.10 p. 110; K&R II Sec. 5.9 p. 113.
2.9: How do I declare a pointer to an array?
A: Usually, you don't want to. Consider using a pointer to one of
the array's elements instead. Arrays of type T decay into
pointers to type T (see question 2.3), which is convenient;
subscripting or incrementing the resultant pointer accesses the
individual members of the array. True pointers to arrays, when
subscripted or incremented, step over entire arrays, and are
generally only useful when operating on arrays of arrays, if at
all. (See question 2.8 above.) When people speak casually of a
pointer to an array, they usually mean a pointer to its first
element.
If you really need to declare a pointer to an entire array, use
something like "int (*ap)[N];" where N is the size of the array.
(See also question 10.3.) If the size of the array is unknown,
N can be omitted, but the resulting type, "pointer to array of
unknown size," is useless.
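Here is what "stepping over entire arrays" looks like in practice, using a small two-dimensional array (the names are illustrative):

```c
/* grid decays into a pointer to its first row, of type int (*)[3];
 * incrementing such a pointer steps over a whole row of three ints. */
int grid[2][3] = { {1, 2, 3}, {4, 5, 6} };

int first_of_second_row(void)
{
	int (*ap)[3] = grid;
	ap++;               /* now points at the second row */
	return (*ap)[0];
}
```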
2.10: How can I dynamically allocate a multidimensional array?
A: It is usually best to allocate an array of pointers, and then
initialize each pointer to a dynamically-allocated "row." The
resulting "ragged" array can save space, although it is not
necessarily contiguous in memory as a real array would be. Here
is a two-dimensional example:
	int **array = (int **)malloc(nrows * sizeof(int *));
	for(i = 0; i < nrows; i++)
		array[i] = (int *)malloc(ncolumns * sizeof(int));
(In "real" code, of course, malloc would be declared correctly,
and each return value checked.)
You can keep the array's contents contiguous, while making later
reallocation of individual rows difficult, with a bit of
explicit pointer arithmetic:
	int **array = (int **)malloc(nrows * sizeof(int *));
	array[0] = (int *)malloc(nrows * ncolumns * sizeof(int));
	for(i = 1; i < nrows; i++)
		array[i] = array[0] + i * ncolumns;
In either case, the elements of the dynamic array can be
accessed with normal-looking array subscripts: array[i][j].
If the double indirection implied by the above schemes is for
some reason unacceptable, you can simulate a two-dimensional
array with a single, dynamically-allocated one-dimensional
array:
int *array = (int *)malloc(nrows * ncolumns * sizeof(int));
However, you must now perform subscript calculations manually,
accessing the i,jth element with array[i * ncolumns + j]. (A
macro can hide the explicit calculation, but invoking it then
requires parentheses and commas which don't look exactly like
multidimensional array subscripts.)
Finally, you can use pointers-to-arrays:
	int (*array)[NCOLUMNS] =
		(int (*)[NCOLUMNS])malloc(nrows * sizeof(*array));

but the syntax gets horrific and all but one dimension must be
known at compile time.
With all of these techniques, you may of course need to remember
to free the arrays (which may take several steps) when they are
no longer needed. You must also be extremely cautious when
passing dynamically-allocated arrays down to other functions, if
those functions are also to accept conventional, statically-
allocated arrays (see question 2.8).
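The several-step deallocation for the first (ragged) scheme can be sketched as follows. alloc2d and free2d are hypothetical helper names, and, as in the fragments above, the per-row allocations are left unchecked for brevity (free on a null pointer is harmless):

```c
#include <stdlib.h>

/* Allocate an nrows x ncolumns "ragged" array of int. */
int **alloc2d(int nrows, int ncolumns)
{
	int i;
	int **array = (int **)malloc(nrows * sizeof(int *));
	if (array == NULL)
		return NULL;
	for (i = 0; i < nrows; i++)
		array[i] = (int *)malloc(ncolumns * sizeof(int));
	return array;
}

/* Freeing takes one free per row, then one more for the
 * array of row pointers itself. */
void free2d(int **array, int nrows)
{
	int i;
	if (array == NULL)
		return;
	for (i = 0; i < nrows; i++)
		free(array[i]);    /* free each row first... */
	free(array);               /* ...then the row-pointer array */
}

/* quick self-check: allocate, use, and free */
int roundtrip(void)
{
	int **a = alloc2d(2, 3);
	int v;
	if (a == NULL)
		return -1;
	a[1][2] = 42;
	v = a[1][2];
	free2d(a, 2);
	return v;
}
```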
2.11: Here's a neat trick: if I write
int realarray[10];
int *array = &realarray[-1];
I can treat "array" as if it were a 1-based array.
A: This technique, though attractive, is not strictly conforming to
the C standards (in spite of its appearance in Numerical Recipes
in C). Pointer arithmetic is defined only as long as the
pointer points within the same allocated block of memory, or to
the imaginary "terminating" element one past it; otherwise, the
behavior is undefined, _even if the pointer is not
dereferenced_. The code above could fail if, while subtracting
the offset, an illegal address were generated (perhaps because
the address tried to "wrap around" past the beginning of some
memory segment).
References: ANSI Sec. 3.3.6 p. 48, Rationale Sec. 3.2.2.3 p. 38;
K&R II Sec. 5.3 p. 100, Sec. 5.4 pp. 102-3, Sec. A7.7 pp. 205-6.
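One strictly conforming alternative, at the cost of a single wasted element, is to declare the array one element larger and simply ignore element 0 (a sketch, with invented names):

```c
/* Indices 1 through 10 are all within the array, so the pointer
 * arithmetic is well defined; element 0 is simply never used. */
int realarray[11];

void set1(int i, int v) { realarray[i] = v; }
int  get1(int i)        { return realarray[i]; }
```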
2.12: I passed a pointer to a function which initialized it:
	main()
	{
		int *ip;
		f(ip);
		return 0;
	}

	void f(ip)
	int *ip;
	{
		static int dummy;
		ip = &dummy;
		*ip = 5;
	}

but the pointer in the caller was unchanged.
A: Did the function try to initialize the pointer itself, or just
what it pointed to? Remember that arguments in C are passed by
value. The called function altered only the passed copy of the
pointer. You'll want to pass the address of the pointer (the
function will end up accepting a pointer-to-a-pointer).
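The corrected version passes the address of the pointer; set_pointer and caller are hypothetical names standing in for f and main above:

```c
static int dummy;

/* Accepting int ** lets the function modify the caller's pointer. */
void set_pointer(int **ipp)
{
	*ipp = &dummy;
	**ipp = 5;
}

int caller(void)
{
	int *ip = (int *)0;
	set_pointer(&ip);            /* pass the address of the pointer */
	return (ip != 0) ? *ip : -1;
}
```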
2.13: I have a char * pointer that happens to point to some ints, and
I want to step it over them. Why doesn't
((int *)p)++;
work?
A: In C, a cast operator does not mean "pretend these bits have a
different type, and treat them accordingly;" it is a conversion
operator, and by definition it yields an rvalue, which cannot be
assigned to, or incremented with ++. (It is an anomaly in pcc-
derived compilers, and an extension in gcc, that expressions
such as the above are ever accepted.) Say what you mean: use